@CardSorting
This PR implements ChromaTransformer2DModel and ChromaPipeline as separate classes, addressing the feedback from DN6 about creating dedicated implementations rather than using variants within FluxTransformer2DModel.

Changes Made:

Core Implementation

  • ChromaTransformer2DModel: New transformer class specifically for Chroma, separate from FluxTransformer2DModel
  • ChromaPipeline: Dedicated pipeline without CLIP encoder (T5-only) for Chroma's requirements
  • Single file converter: Added support for loading Chroma checkpoints from safetensors files
  • Attention masking: Implemented the attention masking Chroma requires
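The attention-masking point above can be illustrated with a toy sketch. This helper is hypothetical (it is not the code in the PR): given the true token count of each prompt in a padded batch, it builds a 0/1 mask so attention layers can ignore T5 padding positions.

```python
def build_attention_mask(token_lengths, max_length):
    """Return one 0/1 mask per prompt: 1 for real tokens, 0 for padding."""
    return [
        [1 if i < n else 0 for i in range(max_length)]
        for n in token_lengths
    ]

# Two prompts tokenized to 3 and 5 tokens, both padded to length 6:
masks = build_attention_mask([3, 5], 6)
# masks[0] == [1, 1, 1, 0, 0, 0]
# masks[1] == [1, 1, 1, 1, 1, 0]
```

In the real model the mask is a tensor passed into the attention computation, but the padding logic is the same idea.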

Package Integration

  • Updated all __init__.py files to export new classes
  • Added proper imports in src/diffusers/__init__.py
  • Integrated with existing single file loading infrastructure

Example Usage

  • examples/chroma_generation.py: Comprehensive example script showing how to use the Chroma model
  • Demonstrates both HuggingFace Hub and single file loading methods
  • Includes optimization techniques (CPU offloading, VAE slicing/tiling)
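The two loading paths and the optimizations the example script covers can be sketched roughly as follows. The repo id and checkpoint path are placeholders, and the exact API surface may differ from the merged code; the optimization calls are standard diffusers pipeline methods.

```python
import torch
from diffusers import ChromaPipeline, ChromaTransformer2DModel

# Path 1: load the full pipeline from the HuggingFace Hub
# ("<hub-repo-id>" is a placeholder, not a real repo id).
pipe = ChromaPipeline.from_pretrained("<hub-repo-id>", torch_dtype=torch.bfloat16)

# Path 2: load the transformer from a single safetensors checkpoint
# via the single file converter this PR adds, then build the pipeline around it.
transformer = ChromaTransformer2DModel.from_single_file(
    "<path/to/chroma.safetensors>", torch_dtype=torch.bfloat16
)
pipe = ChromaPipeline.from_pretrained(
    "<hub-repo-id>", transformer=transformer, torch_dtype=torch.bfloat16
)

# Memory optimizations mentioned above.
pipe.enable_model_cpu_offload()  # keep submodules on CPU until they are needed
pipe.vae.enable_slicing()        # decode the latent batch slice by slice
pipe.vae.enable_tiling()         # decode large latents tile by tile

image = pipe("a watercolor fox in a pine forest").images[0]
```

Running this requires the model weights and a GPU, so it is meant as a reading aid rather than a self-contained test.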

Technical Details:

  • Uses the T5 XXL text encoder instead of CLIP
  • Compatible with the FLUX VAE
  • Supports FlowMatchEulerDiscreteScheduler
  • Implements proper device and dtype handling
  • Memory-efficient optimizations included
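For intuition on the scheduler mentioned above: at each denoising step, FlowMatchEulerDiscreteScheduler performs an explicit Euler update along the model's predicted velocity. A simplified scalar sketch (the real scheduler operates on tensors and manages its own sigma schedule):

```python
def euler_flow_step(sample, model_output, sigma, sigma_next):
    """One explicit Euler step: move the sample along the predicted velocity."""
    return sample + (sigma_next - sigma) * model_output

x = 1.0   # current noisy sample (toy scalar)
v = -0.5  # model's predicted velocity at sigma
x_next = euler_flow_step(x, v, sigma=1.0, sigma_next=0.75)
# x_next == 1.0 + (0.75 - 1.0) * (-0.5) == 1.125
```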

This implementation provides a clean, separate architecture for Chroma while maintaining compatibility with the existing diffusers ecosystem.
